Testing SOAR Tools in Use
Modern security operation centers (SOCs) rely on operators and a tapestry of
logging and alerting tools with large scale collection and query abilities. SOC
investigations are tedious as they rely on manual efforts to query diverse data
sources, overlay related logs, and correlate the data into information and then
document results in a ticketing system. Security orchestration, automation, and
response (SOAR) tools are a new technology that promise to collect, filter, and
display needed data; automate common tasks that require SOC analysts' time;
facilitate SOC collaboration; and improve both the efficiency and consistency of
SOCs. SOAR tools have never been tested in practice to evaluate their effects
or to understand how they are used. In this paper, we design and administer the first
hands-on user study of SOAR tools, involving 24 participants and 6 commercial
SOAR tools. Our contributions include the experimental design, which itemizes six
characteristics of SOAR tools, and a methodology for testing them. We describe the
configuration of the test environment in a cyber range, including network,
user, and threat emulation; a full SOC tool suite; and creation of artifacts
allowing multiple representative investigation scenarios to permit testing. We
present the first research results on SOAR tools. We found that SOAR
configuration is critical, as it involves creative design for data display and
automation. We found that SOAR tools increased efficiency and reduced context
switching during investigations, although ticket accuracy and completeness
(indicating investigation quality) decreased with SOAR use. Our findings
indicate that user preferences are slightly negatively correlated with their
performance with the tool; overautomation was a concern of senior analysts, and
SOAR tools that balanced automation with assisting the user's decision-making
were preferred.
AI ATAC 1: An Evaluation of Prominent Commercial Malware Detectors
This work presents an evaluation of six prominent commercial endpoint malware
detectors, a network malware detector, and a file-conviction algorithm from a
cyber technology vendor. The evaluation was administered as the first of the
Artificial Intelligence Applications to Autonomous Cybersecurity (AI ATAC)
prize challenges, funded by and completed in service of the US Navy. The
experiment employed 100K files (50/50% benign/malicious) with a stratified
distribution of file types, including ~1K zero-day program executables
(increasing experiment size two orders of magnitude over previous work). We
present an evaluation process of delivering a file to a fresh virtual machine
equipped with the detection technology, waiting 90 s to allow static detection, then
executing the file and waiting another period for dynamic detection; this
allows greater fidelity in the observational data than previous experiments, in
particular, resource and time-to-detection statistics. To execute all 800K
trials (100K files × 8 tools), a software framework was designed to
choreograph the experiment into a completely automated, time-synced, and
reproducible workflow with substantial parallelization. A cost-benefit model
was configured to integrate the tools' recall, precision, time to detection,
and resource requirements into a single comparable quantity by simulating costs
of use. This provides a ranking methodology for cyber competitions and a lens
through which to reason about the varied statistical viewpoints of the results.
These statistical and cost-model results provide insights into the state of
commercial malware detection.
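The cost-benefit model described above folds each tool's recall, precision, time to detection, and resource use into a single simulated dollar cost that can be ranked. A minimal sketch is below; the cost parameters and weights are illustrative assumptions, not the values configured in the challenge:

```python
from dataclasses import dataclass

@dataclass
class ToolMetrics:
    recall: float         # fraction of malicious files detected
    precision: float      # fraction of alerts that are true positives
    mean_ttd_s: float     # mean time to detection, seconds
    resource_cost: float  # simulated per-trial compute cost, dollars

def simulated_cost(m: ToolMetrics,
                   n_malicious: int = 50_000,
                   cost_missed: float = 100.0,     # assumed cost per undetected malware
                   cost_false_alarm: float = 5.0,  # assumed analyst triage cost per false positive
                   cost_per_s: float = 0.01) -> float:
    """Combine recall, precision, time to detection, and resource
    use into one comparable simulated cost (illustrative weights)."""
    detected = m.recall * n_malicious
    missed = n_malicious - detected
    # alerts = true positives / precision; the remainder are false alarms
    false_alarms = (detected / m.precision - detected) if m.precision > 0 else float("inf")
    delay_cost = detected * m.mean_ttd_s * cost_per_s
    return (missed * cost_missed
            + false_alarms * cost_false_alarm
            + delay_cost
            + m.resource_cost * 2 * n_malicious)
```

A lower simulated cost yields a better ranking, which lets tools with different recall/latency trade-offs be compared on one axis.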
Beyond the Hype: A Real-World Evaluation of the Impact and Cost of Machine Learning-Based Malware Detection
There is a lack of scientific testing of commercially available malware
detectors, especially those that boast accurate classification of
never-before-seen (i.e., zero-day) files using machine learning (ML). The
result is that the efficacy and gaps among the available approaches are opaque,
inhibiting end users from making informed network security decisions and
researchers from targeting gaps in current detectors. In this paper, we present
a scientific evaluation of four market-leading malware detection tools to
assist an organization with two primary questions: (Q1) To what extent do
ML-based tools accurately classify never-before-seen files without sacrificing
detection ability on known files? (Q2) Is it worth purchasing a network-level
malware detector to complement host-based detection? We tested each tool
against 3,536 total files (2,554 or 72% malicious, 982 or 28% benign) including
over 400 zero-day malware, and tested with a variety of file types and
protocols for delivery. We present statistical results on detection time and
accuracy, consider complementary analysis (using multiple tools together), and
provide two novel applications of a recent cost-benefit evaluation procedure by
Iannacone & Bridges that incorporates all the above metrics into a single
quantifiable cost. While the ML-based tools are more effective at detecting
zero-day files and executables, the signature-based tool may still be an
overall better option. Both network-based tools provide substantial (simulated)
savings when paired with either host tool, yet both show poor detection rates
on protocols other than HTTP or SMTP. Our results show that all four tools have
near-perfect precision but alarmingly low recall, especially on file types
other than executables and office files -- 37% of malware tested, including all
polyglot files, were undetected.
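The complementary analysis mentioned above (pairing a network-level detector with a host-based one) can be illustrated as an OR-combination of per-file verdicts: a file is flagged if either tool flags it, so recall rises whenever the tools miss different files. The verdict lists below are hypothetical, not data from the study:

```python
def combine_or(host_verdicts, net_verdicts):
    """A file is flagged if either the host or the network tool flags it."""
    return [h or n for h, n in zip(host_verdicts, net_verdicts)]

def recall(verdicts, labels):
    """Fraction of truly malicious files that were flagged."""
    tp = sum(1 for v, y in zip(verdicts, labels) if v and y)
    pos = sum(labels)
    return tp / pos if pos else 0.0

# Hypothetical verdicts over five files; the first four are malicious.
labels = [True, True, True, True, False]
host   = [True, False, True, False, False]   # host tool catches 2 of 4
net    = [False, True, True, False, False]   # network tool catches an overlapping 2 of 4

paired = combine_or(host, net)               # recall rises from 0.50 to 0.75
```

The same OR-combination can only lower precision (every false alarm from either tool survives), which is why the cost-benefit framing is needed to judge whether the pairing is worth purchasing.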
The BUFFALO HST Survey
The Beyond Ultra-deep Frontier Fields and Legacy Observations (BUFFALO) is a 101-orbit + 101-parallel Cycle 25 Hubble Space Telescope (HST) Treasury program taking data from 2018 to 2020. BUFFALO will expand existing coverage of the Hubble Frontier Fields (HFF) in Wide Field Camera 3/IR F105W, F125W, and F160W and Advanced Camera for Surveys/WFC F606W and F814W around each of the six HFF clusters and flanking fields. This additional area has not been observed by HST but is already covered by deep multiwavelength data sets, including Spitzer and Chandra. As with the original HFF program, BUFFALO is designed to take advantage of gravitational lensing from massive clusters to simultaneously find high-redshift galaxies that would otherwise lie below HST detection limits and model foreground clusters to study the properties of dark matter and galaxy assembly. The expanded area will provide the first opportunity to study both cosmic variance at high redshift and galaxy assembly in the outskirts of the large HFF clusters. Five additional orbits are reserved for transient follow-up. BUFFALO data, including mosaics, value-added catalogs, and cluster mass distribution models, will be released via MAST on a regular basis as the observations and analysis are completed for the six individual clusters.
MOONS: The New Multi-Object Spectrograph for the VLT
MOONS is the new Multi-Object Optical and Near-infrared Spectrograph currently under construction for the Very Large Telescope (VLT) at ESO. This remarkable instrument combines, for the first time, the collecting power of an 8-m telescope, 1000 fibres with individual robotic positioners, and both low- and high-resolution simultaneous spectral coverage across the 0.64–1.8 μm wavelength range. This facility will provide the astronomical community with a powerful, world-leading instrument able to serve a wide range of Galactic, extragalactic and cosmological studies. Construction is now proceeding full steam ahead and this overview article presents some of the science goals and the technical description of the MOONS instrument. More detailed information on the MOONS surveys is provided in the other dedicated articles in this Messenger issue.